AI’s most dangerous myths, and the guardrails being put in place

Published on 24/08/2023 | Written by Heather Wright


Will AI dumb down society?…

An Australian AI expert is calling for society to look beyond the hype of AI and more closely analyse the risks, saying AI has ‘invaded and colonised public imaginations’, but carries with it significant challenges and potential dangers, including the potential dumbing down of society.

Charles Darwin University AI expert Stefan Popenici says AI affects how data is used, impacts privacy and is eroding the ability to think critically and creatively (which could, ironically, perhaps bolster the field of entrants for upcoming Darwin Awards).

“There is no creativity, no critical thinking, no depth or wisdom in what generative AI gives users.”

While he’s talking largely about AI in education, Popenici’s comments hold true for wider use of AI too, and they come as the Australian Human Rights Commission calls for an AI Act to protect human rights, and tech companies begin to introduce guardrails limiting how their AI tools can be used.

A McKinsey survey, the State of AI in 2023, found few companies are fully prepared for the widespread use of generative AI, or for the business risks the tools could bring.

It found just 21 percent of respondents had policies governing employees’ use of the technology at work, and just 32 percent were mitigating the commonly cited risk of inaccuracy.

In Popenici’s paper, The Critique of AI as a Foundation for Judicious Use in Higher Education, he says AI is inherently discriminatory, is not objective, factual or unbiased, and that its improper use is sending education into a global crisis.

The two biggest myths about AI in education, Popenici says, are the belief that AI is objective, factual and unbiased, when it is in fact directly shaped by specific values, beliefs and biases, and the belief that the technology doesn’t discriminate.

“It is deceiving to say, dangerous to believe, that artificial intelligence is intelligent. There is no creativity, no critical thinking, no depth or wisdom in what generative AI gives users after a prompt,” he says. “It is just plausible text with good syntax and grammar.”

“Intelligence, as a human trait, is a term that describes a very different set of skills and abilities, much more complex and harder to separate, label, measure and manipulate than any computing system associated with the marketing label of AI.”

Alongside discrimination, he outlines concerns over users’ privacy, with information and data provided by users collected to train and develop the models, and all data ‘potentially filed, used, aggregated and connected to a user’s identity’.

“Especially at a time when banks use data aggregated from the internet to decide a credit score, insurance companies decide premiums based on information sold by data brokers and all our lives are influenced by data collected on individuals with and without their knowledge, teachers have a duty of care to protect the privacy of their students and their future.”

Popenici’s comments come amid increased scrutiny over generative AI’s use of data, and more than a little AI fatigue.

This week, in an effort to alleviate some of those concerns, Salesforce has put guardrails around its AI services with an updated AI acceptable use policy, which outlines the use cases its products are not allowed to be used for – including deepfakes, individualised advice that would otherwise come from licensed professionals, such as financial and legal advice, and automated decision-making processes with legal effects.

The policy also notes that customers can’t use third-party services linked to Salesforce’s offerings for any of the forbidden purposes.

“It’s not enough to deliver the technological capabilities of generative AI, we must prioritise responsible innovation to help guide how this transformative technology can and should be used,” Salesforce chief ethical and humane use officer Paula Goldman says.

In June the company launched its Einstein GPT Trust Layer as part of its AI Cloud, designed to alleviate some data privacy and compliance concerns. The Trust Layer provides large language models (LLMs) with access to data without needing to move that data into the LLM.
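How that separation works isn’t spelled out here, but the general pattern – grounding the model at request time rather than training it on customer data – can be sketched. The Python below is illustrative only, not Salesforce’s implementation; the function names, fields and the call_llm parameter are hypothetical stand-ins. It shows a record being fetched from the system of record and placed into a single, transient prompt, so the data is never moved into or used to train the model.

```python
# Illustrative sketch of request-time grounding (not Salesforce's actual code).
# Customer data is fetched per request and injected into the prompt only;
# it is never used to train or fine-tune the model.

def fetch_account_record(account_id: str) -> dict:
    # Hypothetical stand-in for a secure lookup against the system of record (e.g. a CRM).
    return {"name": "Acme Pty Ltd", "open_cases": 2, "renewal_date": "2024-03-01"}

def build_grounded_prompt(question: str, record: dict) -> str:
    # The record is inserted into the prompt for this single request only.
    context = "\n".join(f"{k}: {v}" for k, v in record.items())
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

def answer(question: str, account_id: str, call_llm) -> str:
    # call_llm is any LLM client callable; the data lives only in this transient prompt.
    record = fetch_account_record(account_id)
    prompt = build_grounded_prompt(question, record)
    return call_llm(prompt)

if __name__ == "__main__":
    # Dummy LLM stand-in so the sketch runs end to end.
    print(answer("When is the renewal due?", "001", call_llm=lambda p: p[:120]))
```

The design point is simply that the model sees the data only inside a per-request prompt, which is the kind of separation the Trust Layer is described as providing.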

Meanwhile, the Australian Human Rights Commission has again called for immediate steps to be taken to regulate AI to protect individuals ‘from the unique risks posed by this technology’.

“Australia already has several pieces of legislation regulating AI usage in specific settings or circumstances. However, the regulatory environment for AI is patchwork, and regulatory gaps likely exist.”

The Commission says while it supports the creation of AI-specific legislation, it must not duplicate existing legislation.

Popenici has frequently spoken out about the use of AI, and ChatGPT, in education, arguing that the hype of ChatGPT has been distracting students, teachers and schools from the risks it poses to both critical thinking and academic integrity.

In a book published last November, Artificial Intelligence and Learning Futures: Critical Narratives of Technology and Imagination in Higher Education, he says AI in education is a ‘global crisis’ threatening the foundations and integrity of education facilities around the world.

But Popenici is also clear that AI is an integral part of education as it is in other areas of our lives.

“Banning or ignoring generative AI in education is an unrealistic, ignorant and dangerous option… It is vital for educators to understand what AI is and what it is not, what is just hype and marketing, and make the difference between the real potential for beneficial use or selling points and propaganda,” he says.
